-
Mobile robot navigation is a critical aspect of robotics, with applications spanning from service robots to industrial automation. However, navigating complex and dynamic environments poses many challenges, such as avoiding obstacles, making decisions in real time, and adapting to new situations. Reinforcement learning (RL) has emerged as a promising approach for enabling robots to learn navigation policies from their interactions with the environment. However, the application of RL methods to real-world tasks such as mobile robot navigation, and the evaluation of their performance under various training–testing settings, has not been sufficiently researched. In this paper, we design an evaluation framework that investigates an RL algorithm's generalization capability to unseen scenarios, in terms of learning convergence and success rates, by transferring policies learned in simulation to physical environments. To achieve this, we built a simulated environment in Gazebo for training the robot over a large number of episodes. The training environment closely mimics the typical indoor scenarios a mobile robot can encounter, replicating real-world challenges. For evaluation, we designed physical environments with and without unforeseen indoor scenarios. The evaluation framework outputs statistical metrics, which we use to conduct an extensive study of a deep RL method, namely proximal policy optimization (PPO). The results provide valuable insights into the strengths and limitations of the method for mobile robot navigation. Our experiments demonstrate that the model trained in simulation can be deployed to a previously unseen physical world with a success rate of over 88%. The insights gained from our study can assist practitioners and researchers in selecting suitable RL approaches and training–testing settings for their specific robotic navigation tasks.
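The abstract names proximal policy optimization (PPO) and a success-rate metric computed over evaluation episodes. The sketch below shows how such a study might be wired together using the off-the-shelf stable-baselines3 implementation of PPO; the environment id `IndoorNav-v0`, the `goal_reached` info key, and the episode counts are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' code): train PPO in a simulated
# environment, then measure success rate over held-out evaluation episodes.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("IndoorNav-v0")          # hypothetical Gazebo-backed Gym wrapper
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=1_000_000)  # long simulated training run
model.save("ppo_indoor_nav")

# Evaluation: fraction of episodes in which the robot reaches its goal.
successes, n_episodes = 0, 100
for _ in range(n_episodes):
    obs, _ = env.reset()
    done = False
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
    successes += int(info.get("goal_reached", False))  # assumed info key
print(f"success rate: {successes / n_episodes:.2%}")
```

In a sim-to-real study like the one described, the same saved policy would then be loaded on the physical robot and the identical success-rate statistic recomputed in the physical test environments.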
-
Manser, Kimberly E.; Rao, Raghuveer M.; Howell, Christopher L. (Eds.) The deep Q-learning (DQL) method has proven highly successful for autonomous mobile robots. However, routine DQL training can yield improper agent behavior (repeated circling-in-place actions) and requires many training episodes before convergence. To address this problem, this project develops novel techniques that improve DQL training in both simulations and physical experiments. Specifically, a Dynamic Epsilon Adjustment method is integrated to reduce the frequency of non-ideal agent behaviors and thereby improve control performance (i.e., goal rate). A Dynamic Window Approach (DWA) global path planner is incorporated into the physical training process so that the agent can reach more goals with fewer collisions within a fixed number of episodes. The GMapping Simultaneous Localization and Mapping (SLAM) method is also applied to provide a SLAM map to the path planner. The experimental results demonstrate that our approach significantly improves training performance in both the simulated and physical training environments.
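The abstract does not specify how the Dynamic Epsilon Adjustment works, so the following is only a plausible sketch of the general idea behind a dynamic epsilon-greedy schedule: epsilon decays per episode as usual, but is bumped back up when the recent goal rate stalls. The window size, stall threshold, and bump amount are hypothetical parameters, not values from the paper.

```python
import random

class DynamicEpsilon:
    """Illustrative dynamic epsilon schedule for epsilon-greedy DQL."""

    def __init__(self, eps_start=1.0, eps_min=0.05, decay=0.995,
                 bump=0.2, window=50, stall_rate=0.1):
        self.eps = eps_start
        self.eps_min, self.decay = eps_min, decay
        self.bump, self.window, self.stall_rate = bump, window, stall_rate
        self.recent = []                     # 1 = goal reached, 0 = failure

    def end_episode(self, reached_goal: bool) -> None:
        """Decay epsilon, but bump it back up if the goal rate stalls."""
        self.recent = (self.recent + [int(reached_goal)])[-self.window:]
        self.eps = max(self.eps_min, self.eps * self.decay)
        if (len(self.recent) == self.window
                and sum(self.recent) / self.window < self.stall_rate):
            self.eps = min(1.0, self.eps + self.bump)  # re-encourage exploration

    def select_action(self, q_values) -> int:
        """Epsilon-greedy action selection over a list of Q-values."""
        if random.random() < self.eps:
            return random.randrange(len(q_values))     # explore
        return max(range(len(q_values)), key=q_values.__getitem__)  # exploit
```

The key design choice in any such rule is tying exploration back to a measurable training signal (here, a sliding-window goal rate) rather than using a purely time-based decay.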
-
We study the discretization of a linear evolution partial differential equation when its Green's function is known or well approximated. We provide error estimates both for the spatial approximation and for the time-stepping approximation. We show that, in fact, an approximation of the Green's function is almost as good as the Green's function itself. For suitable time-dependent parabolic equations, we explain how to obtain good, explicit approximations of the Green's function using the Dyson-Taylor commutator method that we developed in J. Math. Phys. 51 (2010), no. 10, 103502 (reference [15]). This short-time approximation, when combined with a bootstrap argument, gives an approximate solution on any fixed time interval within any prescribed tolerance.
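As a reminder of the setting (a standard identity, not the paper's specific estimates): the Green's function propagates the solution forward in time, and an approximation valid only on short intervals can be chained over subintervals to reach any fixed final time, which is the role of the bootstrap argument.

```latex
% Standard setting assumed here (a sketch, not the paper's estimates):
% the solution of  \partial_t u = L(t)\,u  with data u(s,\cdot)=u_s is
% propagated by the Green's function G (a two-time kernel),
\[
  u(t,x) \;=\; \bigl(\mathcal{S}(t,s)\,u_s\bigr)(x)
         \;=\; \int G(t,s,x,y)\, u_s(y)\, \mathrm{d}y , \qquad t \ge s .
\]
% A short-time approximation of G on intervals of length \tau is chained
% via the evolution property of the solution operators to reach a fixed
% final time T = n\tau:
\[
  \mathcal{S}(T,0) \;=\; \mathcal{S}\bigl(n\tau,(n-1)\tau\bigr)\,
  \mathcal{S}\bigl((n-1)\tau,(n-2)\tau\bigr)\cdots \mathcal{S}(\tau,0).
\]
```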
-
Abstract: Renewable fuel generation is essential for a low-carbon-footprint economy. Thus, over the last five decades, a significant effort has been dedicated to increasing the performance of solar fuel generating devices. Specifically, the solar-to-hydrogen efficiency of photoelectrochemical cells has progressed steadily towards its fundamental limit, and the faradaic efficiency towards valuable products in CO2 reduction systems has increased dramatically. However, numerous scientific and engineering challenges must still be overcome to turn solar fuels into a viable technology. At the electrode and device level, the conversion efficiency, stability, and product selectivity must be increased significantly. Meanwhile, these performance metrics must be preserved when scaling up devices and systems, while keeping an acceptable cost and carbon footprint. This roadmap surveys different aspects of this endeavor: system benchmarking, device scaling, various approaches to photoelectrode design, materials discovery, and catalysis. Each section of the roadmap focuses on a single topic, discussing the state of the art, the key challenges, and the advancements required to meet them. The roadmap can be used as a guide for researchers and funding agencies, highlighting the most pressing needs of the field.
